【Hackathon 8th No.23】Improved Training of Wasserstein GANs paper reproduction #1147
Conversation
Thanks for your contribution!
Thanks for the submission~
The current code raises an error when run directly; please check the parameters/code first, thanks.
import paddle.nn as nn
import paddle.vision.transforms as transforms

from ..models.wgan_gp import WGAN_GP
At runtime this file cannot be found; change it to a dynamically resolved path:
import os
import sys

ROOT_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
sys.path.append(ROOT_DIR)
from models.wgan_gp import WGAN_GP
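For context: the relative import `from ..models.wgan_gp import WGAN_GP` only resolves when the file is executed as part of a package; running the script directly (e.g. `python path/to/script.py`) makes Python treat it as a top-level module with no parent package, which is why prepending the repository root to `sys.path` fixes the lookup.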
But that won't pass the code style check.
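(If the failure is the usual lint rule against module-level imports appearing below other statements, such as flake8's E402, a common workaround is to mark the deferred import explicitly with a trailing `# noqa: E402`; which rule this repository's CI actually enforces is an assumption here.)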
OK, sounds good; let's focus on getting it to run first. It still doesn't seem to run for me at the moment.
nn.Linear(noise_dim, 512 * 4 * 4),
nn.BatchNorm1D(512 * 4 * 4),
nn.ReLU(),
lambda x: x.reshape([-1, 512, 4, 4]),
This raises an error at runtime: Paddle's nn.Sequential can only contain objects that inherit from nn.Layer, not lambda expressions, which trigger an `assert isinstance(layer, Layer)` error. It can be rewritten along these lines:
class CIFAR10Generator(nn.Layer):
    def __init__(self, noise_dim=100, output_channels=3):
        super(CIFAR10Generator, self).__init__()
        self.layers1 = nn.Sequential(
            nn.Linear(noise_dim, 512 * 4 * 4),
            nn.BatchNorm1D(512 * 4 * 4),
            nn.ReLU(),
        )
        self.layers2 = nn.Sequential(
            nn.Conv2DTranspose(512, 256, 4, 2, 1),
            nn.BatchNorm2D(256),
            nn.ReLU(),
            nn.Conv2DTranspose(256, 128, 4, 2, 1),
            nn.BatchNorm2D(128),
            nn.ReLU(),
            nn.Conv2DTranspose(128, output_channels, 4, 2, 1),
            nn.Tanh(),
        )

    def forward(self, x):
        # Do the reshape in forward() instead of as a lambda inside nn.Sequential.
        x = self.layers1(x)
        x = x.reshape([-1, 512, 4, 4])
        x = self.layers2(x)
        return x
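As a quick sanity check (a sketch only, not part of the PR; the shapes follow the snippet above), feeding a noise batch through the rewritten generator should yield CIFAR-10-shaped images:

import paddle

gen = CIFAR10Generator(noise_dim=100, output_channels=3)
noise = paddle.randn([8, 100])
fake = gen(noise)
print(fake.shape)  # three stride-2 upsamples: 4 -> 8 -> 16 -> 32, so [8, 3, 32, 32]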
real_output = self.discriminator(real_data)
fake_output = self.discriminator(fake_data)

gp = self.gradient_penalty(real_data, fake_data)
An error occurs during execution: RuntimeError: (Unavailable) The Op flatten_grad doesn't have any grad op. If you don't intend calculating higher order derivatives, please set create_graph to False. (at ../paddle/fluid/eager/api/generated/eager_generated/backwards/nodes.cc:15349)
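For context: the WGAN-GP penalty differentiates through the discriminator's input gradients, so `create_graph=True` is required, and every op on that path needs a double-grad kernel; the error indicates a `flatten` op on the path has none. Below is a minimal sketch of a penalty that sidesteps this by using `reshape` instead of flatten (the function name, `lambda_gp`, and the interpolation details are assumptions, not the PR's actual code):

import paddle

def gradient_penalty(discriminator, real_data, fake_data, lambda_gp=10.0):
    # Random interpolation between real and fake samples, per WGAN-GP.
    alpha = paddle.rand([real_data.shape[0], 1, 1, 1])
    interpolates = alpha * real_data + (1.0 - alpha) * fake_data
    interpolates.stop_gradient = False

    disc_out = discriminator(interpolates)

    # create_graph=True keeps the gradient computation in the graph so the
    # penalty term itself can be backpropagated (a second-order derivative).
    grads = paddle.grad(
        outputs=disc_out,
        inputs=interpolates,
        create_graph=True,
        retain_graph=True,
    )[0]

    # reshape instead of flatten: this keeps the path double-differentiable.
    grads = grads.reshape([grads.shape[0], -1])
    grad_norm = paddle.sqrt(paddle.sum(grads * grads, axis=1) + 1e-12)
    return lambda_gp * paddle.mean((grad_norm - 1.0) ** 2)

If the flatten sits inside the discriminator's own forward pass, the same swap (reshape in place of nn.Flatten/paddle.flatten) applies there.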
Closing since the following PR has been merged: